Results 1 - 15 of 15
1.
Cogn Emot ; : 1-21, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38427396

ABSTRACT

Social anxiety is characterised by fear of negative evaluation and negative perceptual biases; however, the cognitive mechanisms underlying these negative biases are not well understood. We investigated a possible mechanism which could maintain negative biases: altered adaptation to emotional faces. Heightened sensitivity to negative emotions could result from weakened adaptation to negative emotions, strengthened adaptation to positive emotions, or both mechanisms. We measured adaptation from repeated exposure to either positive or negative emotional faces, in individuals high versus low in social anxiety. We quantified adaptation strength by calculating the point of subjective equality (PSE) before and after adaptation for each participant. We hypothesised: (1) weaker adaptation to angry vs happy faces in individuals high in social anxiety, (2) no difference in adaptation to angry vs happy faces in individuals low in social anxiety, and (3) no difference in adaptation to sad vs happy faces in individuals high in social anxiety. Our results revealed a weaker adaptation to angry compared to happy faces in individuals high in social anxiety (Experiment 1), with no such difference in individuals low in social anxiety (Experiment 1), and no difference in adaptation strength to sad vs happy faces in individuals high in social anxiety (Experiment 2).
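To make the adaptation-strength measure concrete, the sketch below shows one way a point of subjective equality (PSE) could be estimated before and after adaptation by fitting a psychometric function to morph-continuum data. The logistic form, morph levels, and response proportions are illustrative assumptions for demonstration, not the study's actual data or analysis.

```python
# Hypothetical sketch: adaptation strength as a shift in the point of
# subjective equality (PSE). All data and the logistic form are made up.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: probability of a 'happy' response."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

def fit_pse(morph_levels, prop_happy):
    """Fit the psychometric function and return the estimated PSE."""
    (pse, _slope), _ = curve_fit(logistic, morph_levels, prop_happy, p0=[50.0, 10.0])
    return pse

# Morph continuum from 0 (fully angry) to 100 (fully happy); invented proportions.
levels = np.array([0, 20, 40, 60, 80, 100], dtype=float)
pre  = np.array([0.02, 0.15, 0.40, 0.70, 0.92, 0.99])   # before adaptation
post = np.array([0.05, 0.30, 0.60, 0.85, 0.97, 1.00])   # after adapting to angry faces

# After adapting to angry faces, less "happiness" in the morph is needed for a
# face to be judged happy, so the PSE shifts toward the angry end; the size of
# that shift serves as the adaptation-strength index.
adaptation_strength = fit_pse(levels, pre) - fit_pse(levels, post)
print(f"PSE shift (adaptation strength): {adaptation_strength:.1f} morph units")
```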

2.
J Exp Child Psychol ; 241: 105856, 2024 May.
Article in English | MEDLINE | ID: mdl-38306737

ABSTRACT

Sound-shape correspondence refers to the preferential mapping of information across the senses, such as associating a nonsense word like bouba with rounded abstract shapes and kiki with spiky abstract shapes. Here we focused on audio-tactile (AT) sound-shape correspondences between nonsense words and abstract shapes that are felt but not seen. Despite previous research indicating a role for visual experience in establishing AT associations, it remains unclear how visual experience facilitates AT correspondences. We investigated one hypothesis: that seeing the abstract shapes improves haptic exploration by (a) increasing effective haptic strategies and/or (b) decreasing ineffective haptic strategies. We analyzed five haptic strategies in video recordings of 6- to 8-year-old children obtained in a previous study. We found that the dominant strategy used to explore shapes differed based on visual experience. Effective strategies, which provide information about shape, were dominant in participants with prior visual experience, whereas ineffective strategies, which do not provide information about shape, were dominant in participants without prior visual experience. With prior visual experience, poking, an effective and efficient strategy, was dominant, whereas without prior visual experience, uncategorizable and ineffective strategies were dominant. These findings suggest that prior visual experience of abstract shapes in 6- to 8-year-olds can increase the effectiveness and efficiency of haptic exploration, potentially explaining why prior visual experience can increase the strength of AT sound-shape correspondences.


Subjects
Haptic Technology, Ocular Vision, Child, Humans, Touch, Sound, Emotions
3.
J Exp Child Psychol ; 209: 105167, 2021 09.
Article in English | MEDLINE | ID: mdl-33915481

ABSTRACT

Sound-shape crossmodal correspondence, the naturally occurring associations between abstract visual shapes and nonsense sounds, is one aspect of multisensory processing that strengthens across early childhood. Little is known regarding whether school-aged children exhibit other variants of sound-shape correspondences such as audio-tactile (AT) associations between tactile shapes and nonsense sounds. Based on previous research in blind individuals suggesting the role of visual experience in establishing sound-shape correspondence, we hypothesized that children would show weaker AT association than adults and that children's AT association would be enhanced with visual experience of the shapes. In Experiment 1, we showed that, when asked to match shapes explored haptically via touch to nonsense words, 6- to 8-year-olds exhibited inconsistent AT associations, whereas older children and adults exhibited the expected AT associations, despite robust audio-visual (AV) associations found across all age groups in a related study. In Experiment 2, we confirmed the role of visual experience in enhancing AT association; here, 6- to 8-year-olds could exhibit the expected AT association if first exposed to the AV condition, whereas adults showed the expected AT association irrespective of whether the AV condition was tested first or second. Our finding suggests that AT sound-shape correspondence is weak early in development relative to AV sound-shape correspondence, paralleling previous findings on the development of other types of multisensory associations. The potential role of visual experience in the development of sound-shape correspondences in other senses is discussed.


Subjects
Touch Perception, Touch, Adolescent, Adult, Child, Preschool Child, Emotions, Humans, Sound
4.
Atten Percept Psychophys ; 82(8): 3973-3992, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32935292

ABSTRACT

Correctly assessing the emotional state of others is a crucial part of social interaction. While facial expressions provide much information, faces are often not viewed in isolation, but occur with concurrent sounds, usually voices, which also provide information about the emotion being portrayed. Many studies have examined the crossmodal processing of faces and sounds, but results have been mixed, with different paradigms yielding different conclusions. Using a psychophysical adaptation paradigm, we carried out a series of four experiments to determine whether there is a perceptual advantage when faces and voices match in emotion (congruent) versus when they do not match (incongruent). We paired a single face with a crowd of voices, a crowd of faces with a crowd of voices, and a single face of reduced salience with a crowd of voices, testing this last condition with and without attention directed to the emotion in the face. While we observed aftereffects in the hypothesized direction (adaptation to faces conveying positive emotion yielded negative, contrastive, perceptual aftereffects), we only found a congruent advantage (stronger adaptation effects) when faces were attended and of reduced salience, in line with the theory of inverse effectiveness.


Subjects
Emotions, Voice, Attention, Facial Expression, Humans, Visual Perception
5.
Vision (Basel) ; 4(1)2020 Feb 05.
Article in English | MEDLINE | ID: mdl-32033350

ABSTRACT

While previous research has investigated key factors contributing to multisensory integration in isolation, relatively little is known regarding how these factors interact, especially when considering the enhancement of visual contrast sensitivity by a task-irrelevant sound. Here we explored how auditory stimulus properties, namely salience and temporal phase coherence in relation to the visual target, jointly affect the extent to which a sound can enhance visual contrast sensitivity. Visual contrast sensitivity was measured with a psychophysical task in which human adult participants reported the location of a visual Gabor pattern presented at various contrast levels. We expected contrast sensitivity to be most enhanced (i.e., the contrast threshold to be lowest) when the visual stimulus was accompanied by a task-irrelevant sound that was weak in auditory salience and modulated in phase with the visual stimulus (strong temporal phase coherence). Our expectations were confirmed, but only when we accounted for individual differences in the auditory salience level that induces maximal multisensory enhancement. Our findings highlight the importance of interactions between temporal phase coherence and stimulus effectiveness in determining the strength of multisensory enhancement of visual contrast, as well as the importance of accounting for individual differences.
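As a rough illustration of how a contrast threshold might be extracted from such a location task, the sketch below fits a Weibull psychometric function to proportion-correct data and reads off the contrast yielding a criterion performance level. The two-alternative guessing rate, lapse rate, contrast levels, and data are assumptions for demonstration, not the study's analysis.

```python
# Hypothetical sketch: contrast threshold from a Weibull fit to made-up data.
import numpy as np
from scipy.optimize import curve_fit

def weibull(contrast, alpha, beta, guess=0.5, lapse=0.02):
    """Proportion correct in an assumed two-alternative location task."""
    return guess + (1.0 - guess - lapse) * (1.0 - np.exp(-(contrast / alpha) ** beta))

contrasts = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
p_correct = np.array([0.52, 0.55, 0.68, 0.83, 0.95, 0.97])   # invented data

(alpha, beta), _ = curve_fit(weibull, contrasts, p_correct, p0=[0.05, 2.0])

# Threshold: contrast yielding 75% correct, solved from the fitted Weibull.
target, guess, lapse = 0.75, 0.5, 0.02
threshold = alpha * (-np.log(1.0 - (target - guess) / (1.0 - guess - lapse))) ** (1.0 / beta)
print(f"Contrast threshold (75% correct): {threshold:.3f}")

# Multisensory enhancement would appear as a lower threshold in the
# sound-present condition than in a visual-only baseline.
```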

6.
Brain Sci ; 9(8)2019 Jul 25.
Article in English | MEDLINE | ID: mdl-31349644

ABSTRACT

One source of information we glean from everyday experience, which guides social interaction, is assessing the emotional state of others. Emotional state can be expressed through several modalities: body posture or movements, body odor, touch, facial expression, or the intonation in a voice. Much research has examined emotional processing within one sensory modality or the transfer of emotional processing from one modality to another. Yet, less is known regarding interactions across different modalities when perceiving emotions, despite our common experience of seeing emotion in a face while hearing the corresponding emotion in a voice. Our study examined whether visual and auditory emotions of matched valence (congruent) conferred stronger perceptual and physiological effects compared to visual and auditory emotions of unmatched valence (incongruent). We quantified how exposure to emotional faces and/or voices altered perception using psychophysics and how it altered a physiological proxy for stress or arousal using salivary cortisol. While we found no significant advantage of congruent over incongruent emotions, we found that changes in cortisol were associated with perceptual changes. Following exposure to negative emotional content, larger decreases in cortisol, indicative of less stress, correlated with more positive perceptual aftereffects, indicative of stronger biases to see neutral faces as happier.

7.
J Vis ; 17(3): 20, 2017 03 01.
Article in English | MEDLINE | ID: mdl-28355632

ABSTRACT

We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
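As a rough sketch of how a detection threshold might be tracked in a two-interval task like this, the example below runs a 2-down/1-up adaptive staircase against a simulated observer; the staircase parameters and the observer model are illustrative assumptions, and the original study's actual procedure may have differed. Comparing thresholds obtained under the low- versus high-load visual conditions would then quantify the cross-modal attention effect.

```python
# Hypothetical sketch: 2-down/1-up staircase tracking the modulation depth
# needed to detect the modulated interval (converges near 70.7% correct).
import random

def simulated_observer(depth, threshold=0.1):
    """Toy observer: more likely to answer correctly at larger depths."""
    p_correct = 0.5 + 0.5 * min(depth / (2 * threshold), 1.0)
    return random.random() < p_correct

def run_staircase(start_depth=0.5, step=0.05, n_reversals=10):
    depth, correct_in_row, direction = start_depth, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(depth):
            correct_in_row += 1
            if correct_in_row == 2:                 # two correct -> make it harder
                correct_in_row = 0
                if direction == +1:
                    reversals.append(depth)
                direction = -1
                depth = max(depth - step, 0.01)
        else:                                       # one wrong -> make it easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(depth)
            direction = +1
            depth = min(depth + step, 1.0)
    return sum(reversals[-6:]) / 6                  # mean of the last reversals

print(f"Estimated modulation-depth threshold: {run_staircase():.3f}")
```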


Subjects
Attention/physiology, Auditory Perception/physiology, Auditory Threshold/physiology, Contrast Sensitivity/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Sound, Young Adult
9.
Front Psychol ; 7: 1468, 2016.
Article in English | MEDLINE | ID: mdl-27733839

ABSTRACT

Faces drive our social interactions. A vast literature suggests an interaction between gender and emotional face perception, with studies using different methodologies demonstrating that the gender of a face can affect how emotions are processed. However, how different is our perception of affective male and female faces? Furthermore, how does our current affective state when viewing faces influence our perceptual biases? We presented participants with a series of faces morphed along an emotional continuum from happy to angry. Participants judged each face morph as either happy or angry. We determined each participant's unique emotional 'neutral' point, defined as the face morph perceived as equally happy and angry, separately for male and female faces. We also assessed how current state affect influenced these perceptual neutral points. Our results indicate that, for both male and female participants, the emotional neutral point for male faces is perceptually biased to be happier than that for female faces. This bias suggests that more happiness is required to perceive a male face as emotionally neutral; i.e., we are biased to perceive a male face as more negative. Interestingly, we also find that perceptual biases in perceiving female faces are correlated with current mood, such that positive state affect correlates with perceiving female faces as happier, while we find no significant correlation between negative state affect and the perception of facial emotion. Furthermore, we find reaction time biases, with slower responses for angry male faces compared to angry female faces.

10.
Front Psychol ; 7: 1046, 2016.
Article in English | MEDLINE | ID: mdl-27471482

ABSTRACT

While some models of how various attributes of a face are processed have posited that face features, invariant physical cues such as gender or ethnicity as well as variant social cues such as emotion, may be processed independently (e.g., Bruce and Young, 1986), other models suggest a more distributed representation and interdependent processing (e.g., Haxby et al., 2000). Here, we use a contingent adaptation paradigm to investigate whether mechanisms for processing the gender and emotion of a face are interdependent and symmetric across the happy-angry emotional continuum, regardless of the gender of the face. We simultaneously adapted participants to angry female faces and happy male faces (Experiment 1) or to happy female faces and angry male faces (Experiment 2). In Experiment 1, we found evidence for contingent adaptation, with simultaneous aftereffects in opposite directions: male faces were biased toward angry while female faces were biased toward happy. Interestingly, in the complementary Experiment 2, we did not find evidence for contingent adaptation, with both male and female faces biased toward angry. Our results highlight that evidence for contingent adaptation, and the underlying interdependent face processing mechanisms that would allow for it, may only be evident for certain combinations of face features. Such limits may be especially important in the case of social cues given how maladaptive it may be to stop responding to threatening information, with angry male faces considered to be the most threatening. The underlying neuronal mechanisms that could account for such asymmetric effects in contingent adaptation remain to be elucidated.

11.
J Neurophysiol ; 105(3): 1258-65, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21228306

ABSTRACT

Faced with an overwhelming amount of sensory information, we are able to prioritize the processing of select spatial locations and visual features. The neuronal mechanisms underlying such spatial and feature-based selection have been studied in considerable detail. More recent work shows that attention can also be allocated to objects, even spatially superimposed objects composed of dynamically changing features that must be integrated to create a coherent object representation. Much less is known about the mechanisms underlying such object-based selection. Our goal was to investigate behavioral and neuronal responses when attention was directed to one of two objects, specifically one of two superimposed transparent surfaces, in a task designed to preclude space-based and feature-based selection. We used functional magnetic resonance imaging (fMRI) to measure changes in blood oxygen level-dependent (BOLD) signals when attention was deployed to one or the other surface. We found that visual areas V1, V2, V3, V3A, and MT+ showed enhanced BOLD responses to translations of an attended relative to an unattended surface. These results reveal that visual areas as early as V1 can be modulated by attending to objects, even objects defined by dynamically changing elements. This provides definitive evidence in humans that early visual areas are involved in a seemingly high-order process. Furthermore, our results suggest that these early visual areas may participate in object-specific feature "binding," a process that seemingly must occur for an object or a surface to be the unit of attentional selection.


Subjects
Attention/physiology, Cues (Psychology), Visual Evoked Potentials/physiology, Visual Pattern Recognition/physiology, Visual Cortex/physiology, Physiological Adaptation/physiology, Adult, Female, Humans, Male
12.
J Neurophysiol ; 98(4): 2399-413, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17715196

ABSTRACT

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.
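The "simple parameterization" mentioned above is not spelled out in the abstract. The sketch below shows one purely hypothetical additive form that captures the three named effects (spatial attention, cross-modal attention, and cross-modal spatial attention); the parameter names and values are invented for illustration and should not be read as the model reported in the paper.

```python
# Purely illustrative additive parameterization; not the authors' model.
def predicted_response(baseline, within_modal_cost, cross_modal_cost,
                       same_side_boost, attended_same_modality, attended_same_side):
    """Predicted response to an ignored stimulus (all terms hypothetical).

    baseline          : response level with attention directed elsewhere
    within_modal_cost : reduction when attention is on another stimulus in
                        the same modality (spatial attention)
    cross_modal_cost  : reduction when attention is on another modality
                        (cross-modal attention)
    same_side_boost   : gain when the attended stimulus shares the same side
                        of space (cross-modal spatial attention)
    """
    r = baseline
    r -= within_modal_cost if attended_same_modality else cross_modal_cost
    if attended_same_side:
        r += same_side_boost
    return r

# Example: ignored visual stimulus while attending an auditory stream on the
# same side of space (all numbers made up).
print(predicted_response(1.0, 0.3, 0.1, 0.15,
                         attended_same_modality=False, attended_same_side=True))
```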


Subjects
Attention/physiology, Auditory Cortex/physiology, Space Perception/physiology, Visual Cortex/physiology, Acoustic Stimulation, Adult, Cues (Psychology), Discrimination (Psychology)/physiology, Eye Movements/physiology, Female, Ocular Fixation, Functional Laterality/physiology, Humans, Magnetic Resonance Imaging, Male, Photic Stimulation, Psychomotor Performance/physiology, Psychophysics
13.
Proc Natl Acad Sci U S A ; 103(51): 19552-7, 2006 Dec 19.
Article in English | MEDLINE | ID: mdl-17164335

ABSTRACT

We used psychophysical and functional MRI (fMRI) adaptation to examine how and where the visual configural cues underlying identification of facial ethnicity, gender, and identity are processed. We found that the cortical regions showing selectivity to these cues are distributed widely across the inferior occipital cortex, fusiform areas, and the cingulate gyrus. These regions were not colocalized with areas activated by traditional face-area localizer scans, which isolate regions defined by stronger fMRI responses to a random series of face images than to a series of non-face images. Because such scans present a random assortment of face images, they presumably produce the strongest responses within regions containing neurons that are face-sensitive but not highly tuned for face type; these areas might be expected to show only weak selective adaptation effects. In contrast, the largest responses to our selective adaptation paradigm would be expected within areas containing more selectively tuned neurons, which might show only a sparse collective response to a series of random faces. Many aspects of face processing (e.g., prosopagnosia, recognition, and configural vs. featural processing) are likely to rely heavily on regions containing high proportions of neurons that show selective tuning for faces.


Subjects
Cerebral Cortex/physiology, Ethnicity, Face, Individuality, Visual Pattern Recognition/physiology, Sex Characteristics, Humans, Magnetic Resonance Imaging, Psychophysics
14.
Vision Res ; 46(18): 2968-76, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16698060

ABSTRACT

Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus when attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction. Also, when the unattended location either contained uncorrelated motion or had no stimulus at all, an MAE was induced in the direction opposite to the attended direction of motion. The strength of the MAE was similar regardless of whether subjects attended to the speed or luminance of the attended dots. These results provide further support for a global feature-based mechanism of attention and show that the effect spreads across all features of an attended object and to all locations of visual space.


Subjects
Attention/physiology, Figural Aftereffect/physiology, Motion Perception/physiology, Adult, Color Perception/physiology, Discrimination (Psychology)/physiology, Humans, Lighting, Visual Pattern Recognition/physiology, Photic Stimulation/methods, Psychophysics, Visual Fields/physiology
15.
Vis Neurosci ; 20(6): 687-701, 2003.
Article in English | MEDLINE | ID: mdl-15088720

ABSTRACT

Lesion or inactivation of the superior colliculus (SC) of the cat results in an animal that fails to orient toward peripheral visual stimuli which normally evoke a brisk, reflexive orienting response. A failure to orient toward a visual stimulus could be the result of a sensory impairment (a failure to detect the visual stimulus) or a motor impairment (an inability to generate the orienting response). Either mechanism could explain the deficit observed during SC inactivation since neurons in the SC can carry visual sensory signals as well as motor commands involved in the generation of head and eye movements. We investigated the effects of SC inactivation in the cat in two ways. First, we tested cats in a visual detection task that required the animals to press a central, stationary foot pedal to indicate detection of a peripheral visual stimulus. Such a motor response does not involve any components of the orienting response and is unlikely to depend on SC motor commands. A deficit in this task would indicate that the SC plays an important role in the detection of visual targets even in a task that does not require visual orienting. Second, to further investigate the visual orienting deficit observed during SC inactivation and to make direct comparisons between detection and orienting performance, we tested cats in a standard perimetry paradigm. Performance in both tasks was tested following focal inactivation of the SC with microinjections of muscimol at various depths and rostral/caudal locations throughout the SC. Our results reveal a dramatic deficit in both the visual detection task and the visual orienting task following inactivation of the SC with muscimol.


Subjects
Perceptual Disorders/physiopathology, Superior Colliculi/physiopathology, Visual Perception/physiology, Animals, Cats, GABA Agonists/pharmacology, Male, Muscimol/pharmacology, Superior Colliculi/drug effects